Computer-aided diagnosis (CAD) systems for skin lesion analysis are an emerging research field with the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown growing interest in developing such CAD systems, with the aim of providing dermatologists with user-friendly tools that reduce the challenges posed by manual inspection. The purpose of this article is to provide a complete literature review of cutting-edge CAD techniques published between 2011 and 2020. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was used to identify a total of 365 publications, 221 on skin lesion segmentation and 144 on skin lesion classification. These articles are analyzed and summarized in a number of different ways so that we can contribute vital information about the methods for developing CAD systems. These ways include: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and handling of imbalance problems); method configuration (techniques, architectures, module frameworks, and losses); training strategies (hyperparameter settings); and evaluation criteria (metrics). We also investigate various performance-enhancing approaches, including ensembling and post-processing. In addition, this survey highlights the primary problems of evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these dilemmas. In conclusion, enlightening findings, recommendations, and trends are discussed for the purpose of guiding future research in the related fields of interest. It is foreseeable that this review will guide researchers at all levels, from beginners to experts, in developing automated and robust CAD systems for skin lesion analysis.
translated by Google Translate
Unsupervised domain adaptation (UDA) via deep learning has attracted appealing attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, compile the commonly used benchmark datasets, and summarize the popular techniques for improved generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
Model counting is a fundamental problem which has been influential in many applications, from artificial intelligence to formal verification. Due to the intrinsic hardness of model counting, approximate techniques have been developed to solve real-world instances of model counting. This paper designs a new anytime approach called PartialKC for approximate model counting. The idea is a form of partial knowledge compilation to provide an unbiased estimate of the model count which can converge to the exact count. Our empirical analysis demonstrates that PartialKC achieves significant scalability and accuracy over prior state-of-the-art approximate counters, including satss and STS. Interestingly, the empirical results show that PartialKC reaches convergence for many instances and therefore provides exact model counting performance comparable to state-of-the-art exact counters.
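Model counting itself has a compact brute-force definition, which is useful when checking an approximate counter against small instances. The sketch below is not PartialKC's partial-knowledge-compilation scheme; it is only a minimal exact counter for DIMACS-style CNF clauses, and the helper name `count_models` is our own:

```python
from itertools import product

def count_models(clauses, n_vars):
    """Exact model counting by exhaustive enumeration.

    Each clause is a list of DIMACS-style literals: v means variable v
    is true, -v means it is false. Feasible only for small n_vars,
    which is exactly why approximate counters like PartialKC exist."""
    count = 0
    for assign in product([False, True], repeat=n_vars):
        sat = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        if all(any(sat(lit) for lit in clause) for clause in clauses):
            count += 1
    return count

# Example: (x1 or x2) and (not x1 or x3) over 3 variables has 4 models.
```

An anytime counter refines an unbiased estimate of this quantity over time instead of enumerating all 2^n assignments.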
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
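Two of the practices the survey tallies, k-fold cross-validation and ensembling over fold models, can be sketched in a few lines. The helper names below (`kfold_indices`, `ensemble_predict`) are illustrative and not taken from any particular participant's solution:

```python
import numpy as np

def kfold_indices(n_samples, k):
    """Yield (train, val) index arrays for k contiguous validation folds."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

def ensemble_predict(models, X):
    """Average the predictions of several fold models (a simple ensemble)."""
    return np.mean([m(X) for m in models], axis=0)

# Toy usage: one constant predictor per fold, averaged at test time.
fold_models = []
for train_idx, _ in kfold_indices(9, 3):
    c = float(train_idx.mean())          # stand-in for "training" on the fold
    fold_models.append(lambda X, c=c: np.full(len(X), c))
```

In practice each fold model would be a trained network, and the averaging would run over its predictions (an "identical models" ensemble in the survey's terminology).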
A central problem in computational biophysics is protein structure prediction, i.e., finding the optimal folding of a given amino acid sequence. This problem has been studied in a classical abstract model, the HP model, where the protein is modeled as a sequence of H (hydrophobic) and P (polar) amino acids on a lattice. The objective is to find conformations maximizing H-H contacts. It is known that even in this reduced setting, the problem is intractable (NP-hard). In this work, we apply deep reinforcement learning (DRL) to the two-dimensional HP model. We can obtain the conformations of best known energies for benchmark HP sequences with lengths from 20 to 50. Our DRL is based on a deep Q-network (DQN). We find that a DQN based on long short-term memory (LSTM) architecture greatly enhances the RL learning ability and significantly improves the search process. DRL can sample the state space efficiently, without the need of manual heuristics. Experimentally we show that it can find multiple distinct best-known solutions per trial. This study demonstrates the effectiveness of deep reinforcement learning in the HP model for protein folding.
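The HP model's objective is simple to state in code: count H-H pairs that are lattice neighbours but not chain neighbours. The sketch below scores a given 2D conformation; the function name `hh_contacts` is our own, and the DQN, the LSTM policy, and the search process are all outside its scope:

```python
def hh_contacts(sequence, coords):
    """Count topological H-H contacts of a 2D HP-model conformation.

    `sequence` is a string over {'H', 'P'}; `coords` lists one (x, y)
    lattice point per residue. A contact is a pair of H residues that
    are lattice neighbours but not adjacent in the chain."""
    assert len(sequence) == len(coords)
    pos = {c: i for i, c in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # visit each pair once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# A 2x2 square fold of "HHHH" closes one non-chain H-H contact.
```

Maximizing this count (equivalently, minimizing the negative energy) is the reward signal a DRL agent would optimize in this setting.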
In contrast to traditional exhaustive search, selective search first clusters documents into several groups, and then, instead of exhaustively searching all documents for a query, limits the search to one group or only a few groups. Selective search is designed to reduce latency and computation in modern large-scale search systems. In this study, we propose MICO, a mutual-information co-training framework for selective search with minimal supervision using search logs. After training, MICO not only clusters documents but also routes unseen queries to relevant clusters for efficient retrieval. In our empirical experiments, MICO significantly improves the performance of selective search on multiple metrics and outperforms a number of existing competitive baselines.
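The cluster-then-route idea behind selective search can be sketched independently of MICO's co-training. Assuming documents and queries share an embedding space, and given precomputed cluster assignments and centroids, a query is routed to its nearest centroid and only that cluster's documents are scored (the function name `route_and_search` is hypothetical):

```python
import numpy as np

def route_and_search(query, doc_vecs, assignments, centroids, top_k=3):
    """Selective search sketch: route the query to its nearest cluster
    centroid, then rank only that cluster's documents by dot product
    instead of scoring the full corpus."""
    cluster = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
    candidates = np.where(assignments == cluster)[0]
    scores = doc_vecs[candidates] @ query
    return cluster, candidates[np.argsort(-scores)][:top_k]
```

The saving comes from scoring one cluster instead of the whole corpus; the quality of the result then hinges on how well the clustering and the query router agree, which is what MICO's co-training targets.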
Missing scans are inevitable in longitudinal studies due to participant dropout or failed scans. In this paper, we propose a deep learning framework to predict missing scans from acquired scans, catering to longitudinal infant studies. Prediction of infant brain MRI is challenging owing to rapid contrast and structural changes, particularly during the first year of life. We introduce a trustworthy metamorphic generative adversarial network (MGAN) for translating infant brain MRI from one time point to another. MGAN has three key features: (i) image translation leveraging spatial and frequency information for detail-preserving mapping; (ii) a quality-guided learning strategy that focuses attention on challenging regions; and (iii) a multi-scale hybrid loss function that improves the translation of tissue contrast and structural details. Experimental results indicate that MGAN outperforms existing GANs by accurately predicting both contrast and anatomical details.
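The abstract does not spell out MGAN's multi-scale hybrid loss, so the sketch below is only a generic illustration of combining a pixel-space term with a frequency-space term (via the 2-D FFT), one common way to penalize both structural and contrast errors; the name `hybrid_loss` and the weighting are assumptions:

```python
import numpy as np

def hybrid_loss(pred, target, w_freq=0.1):
    """Toy hybrid loss: a pixel-space L1 term plus an L1 penalty on the
    2-D FFT magnitude spectra, so local spatial detail and global
    frequency content are both matched."""
    spatial = np.mean(np.abs(pred - target))
    freq = np.mean(np.abs(np.abs(np.fft.fft2(pred)) -
                          np.abs(np.fft.fft2(target))))
    return spatial + w_freq * freq
```

In a training loop the same idea would be expressed with a differentiable FFT (e.g., in a deep learning framework) and typically evaluated at several image scales.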
Tumor infiltration of the recurrent laryngeal nerve (RLN) is a contraindication for robotic thyroidectomy and can be difficult to detect via standard laryngoscopy. Ultrasound (US) is a viable alternative for RLN detection due to its safety and ability to provide real-time feedback. However, the tininess of the RLN, with a diameter typically less than 3 mm, poses significant challenges to its accurate localization. In this work, we propose a knowledge-driven framework for RLN localization, mimicking the standard approach by which surgeons identify the RLN according to its surrounding organs. We construct a prior anatomical model based on the inherent relative spatial relationships between organs. Through Bayesian shape alignment (BSA), we obtain candidate coordinates for the center of a region of interest (ROI) enclosing the RLN. The ROI allows a reduced field of view for determining the refined centroid of the RLN using a dual-path identification network based on multi-scale semantic information. Experimental results indicate that the proposed method achieves a higher hit rate and smaller distance errors compared with state-of-the-art methods.
Subjective cognitive decline (SCD) is a preclinical stage of Alzheimer's disease (AD) that occurs even before mild cognitive impairment (MCI). Progressive SCD converts to MCI, with the potential of further evolving into AD. Therefore, early identification of progressive SCD with neuroimaging techniques (e.g., structural MRI) is of great clinical value for early intervention in AD. However, existing MRI-based machine/deep learning methods usually suffer from the small-sample-size problem, which poses a great challenge to related neuroimaging analysis. The main question we aim to answer in this paper is how to leverage related domains (e.g., AD/NC) to assist the progression prediction of SCD. Meanwhile, we are concerned with which brain regions are more closely related to the identification of progressive SCD. To this end, we propose an attention-guided autoencoder model for efficient cross-domain adaptation that facilitates knowledge transfer from AD to SCD. The proposed model is composed of four key components: 1) a feature-encoding module for learning shared subspace representations of different domains; 2) an attention module for automatically locating disease-related regions of interest in the brain; 3) a decoding module for reconstructing the original input; and 4) a classification module for identifying brain diseases. Through joint training of these four modules, domain-invariant features can be learned, while the attention mechanism highlights regions related to brain diseases. Extensive experiments on the publicly available ADNI dataset and a private CLAS dataset demonstrate the effectiveness of the proposed method. The proposed model can be trained and tested in only 5-10 seconds on a CPU, and is suitable for medical tasks with small datasets.
Deep learning methods outperform traditional methods in image inpainting. To generate contextual textures, researchers are still working to improve existing methods and to propose models that can extract, propagate, and reconstruct features similar to those of the ground-truth regions. In addition, the lack of a high-quality feature-transfer mechanism in deeper layers contributes to persistent aberrations in the generated inpainted regions. To address these limitations, we propose V-LinkNet, a cross-space learning strategy network. To improve the learning of contextualized features, we design a loss model that uses two encoders. Furthermore, we propose a recursive residual transition layer (RSTL), which extracts high-level semantic information and propagates it to lower layers. Finally, we compare inpainting performance on the same face with different masks and on different faces with the same masks. To improve the reproducibility of image inpainting, we propose a standard protocol to overcome biases arising from various masks and images. We investigate the components of V-LinkNet experimentally. Our results surpass the state of the art when evaluated on CelebA-HQ under the standard protocol. In addition, our model generalizes well when evaluated on the Paris Street View and Places2 datasets with the standard protocol.